Building Interpretable Interaction Trees for Deep NLP Models


Abstract

This paper proposes a method to disentangle and quantify interactions among words that are encoded inside a DNN for natural language processing. We construct a tree to encode salient interactions extracted by the DNN. Six metrics are proposed to analyze properties of interactions between constituents in a sentence. The interaction is defined based on Shapley values of words, which are considered as an unbiased estimation of word contributions to the network prediction. Our method is used to quantify word interactions encoded inside the BERT, ELMo, LSTM, CNN, and Transformer networks. Experimental results have provided a new perspective to understand these DNNs, and demonstrated the effectiveness of our method.
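The Shapley value underlying the interaction definition can be estimated by sampling permutations of words and averaging each word's marginal contribution. A minimal sketch of this estimator follows; the toy `value_fn` and `scores` are hypothetical stand-ins for a real network's output, not the paper's model:

```python
import random

def shapley_values(words, value_fn, n_samples=1000, seed=0):
    """Monte Carlo estimate of each word's Shapley value.

    value_fn maps a set of kept word indices to the model's scalar
    output (e.g., the score of the target class) for that subset.
    """
    rng = random.Random(seed)
    n = len(words)
    phi = [0.0] * n
    for _ in range(n_samples):
        perm = list(range(n))
        rng.shuffle(perm)  # a random arrival order of the words
        kept = set()
        prev = value_fn(kept)
        for i in perm:
            kept.add(i)
            cur = value_fn(kept)
            phi[i] += cur - prev  # marginal contribution of word i
            prev = cur
    return [p / n_samples for p in phi]

# Toy "model": output is a sum of per-word scores, plus a bonus when
# "not" and "bad" co-occur -- an interaction between the two words.
words = ["not", "bad", "movie"]
scores = {0: -1.0, 1: -1.0, 2: 0.2}

def value_fn(kept):
    v = sum(scores[i] for i in kept)
    if 0 in kept and 1 in kept:  # "not bad" flips the sentiment
        v += 3.0
    return v

phi = shapley_values(words, value_fn, n_samples=2000)
```

Because the marginal contributions telescope within each permutation, the estimated values always sum exactly to `value_fn(all words) - value_fn(empty set)`, which is the efficiency property that makes Shapley values an unbiased attribution of the prediction.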


Similar Articles

Interpretable Deep Models for ICU Outcome Prediction

Exponential surge in health care data, such as longitudinal data from electronic health records (EHR), sensor data from intensive care unit (ICU), etc., is providing new opportunities to discover meaningful data-driven characteristics and patterns of diseases. Recently, deep learning models have been employed for many computational phenotyping and healthcare prediction tasks to achieve state-of-t...


Building Interpretable Discussions

Shifts in the culture of civic engagement, technologies and practices surrounding social media, and pressure from political leaders have ignited a movement amongst gov’t agencies to extend their efforts for obtaining input on public issues. These projects face serious challenges related to scale of participation and political capture, though collaborative efforts elsewhere suggest we may be abl...


Building Interpretable Models: From Bayesian Networks to Neural Networks

This dissertation explores the design of interpretable models based on Bayesian networks, sum-product networks and neural networks. As briefly discussed in Chapter 1, it is becoming increasingly important for machine learning methods to make predictions that are interpretable as well as accurate. In many practical applications, it is of interest which features and feature interactions are relev...


InterpNET: Neural Introspection for Interpretable Deep Learning

Humans are able to explain their reasoning. On the contrary, deep neural networks are not. This paper attempts to bridge this gap by introducing a new way to design interpretable neural networks for classification, inspired by physiological evidence of the human visual system’s inner-workings. This paper proposes a neural network design paradigm, termed InterpNET, which can be combined with any...


Interpretable Models for Information Extraction




Journal

Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence

Year: 2021

ISSN: 2159-5399, 2374-3468

DOI: https://doi.org/10.1609/aaai.v35i16.17685